61 research outputs found

    A Similarity Measure for Material Appearance

    We present a model to measure the similarity in appearance between different materials, which correlates with human similarity judgments. We first create a database of 9,000 rendered images depicting objects with varying materials, shapes, and illumination. We then gather perceived-similarity data from crowdsourced experiments; our analysis of over 114,840 answers suggests that a shared perception of appearance similarity does indeed exist. We feed this data to a deep learning architecture with a novel loss function, which learns a feature space for materials that correlates with such perceived appearance similarity. Our evaluation shows that our model outperforms existing metrics. Finally, we demonstrate several applications enabled by our metric, including appearance-based search for material suggestions, database visualization, clustering and summarization, and gamut mapping.
    Comment: 12 pages, 17 figures
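    The loss function itself is not given in the abstract; as an illustration of learning a feature space that respects perceptual similarity, the sketch below uses a standard triplet margin loss over toy vectors. The embeddings, dimensionality, and margin value are all hypothetical, not the paper's actual architecture or loss:

```python
import math

def euclidean(a, b):
    """Distance between two feature vectors in the learned material space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor toward a perceptually similar material and push it
    away from a dissimilar one by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy 3-D embeddings (illustrative values only).
a = [0.1, 0.9, 0.0]   # anchor material
p = [0.2, 0.8, 0.1]   # judged similar by observers
n = [0.9, 0.1, 0.5]   # judged dissimilar
print(triplet_margin_loss(a, p, n))  # 0.0, triplet already well separated
```

    In a trained model, the vectors would be network outputs for rendered images, and distances in this space would serve directly as the similarity metric for search and clustering.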

    Learning icons appearance similarity

    Selecting an optimal set of icons is a crucial step in the visual design pipeline for structuring and navigating content. However, designing icon sets is usually a difficult task that requires expert knowledge. In this work, to ease icon set selection for users, we propose a similarity metric that captures the properties of style and visual identity. We train a Siamese Neural Network on an online dataset of icons organized in visually coherent collections, which are used to adaptively sample training data and optimize the training process. As the dataset contains noise, we further collect human-rated information on the perception of icon similarity, which is used to evaluate and test the proposed model. We present several results and applications based on searches, kernel visualizations, and optimized set proposals that can help designers and non-expert users explore large collections of icons.
    Comment: 12 pages, 11 figures
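    The abstract describes Siamese training on pairs drawn from visually coherent collections. A minimal sketch of that sampling scheme, assuming a standard contrastive pair loss and a toy set of named collections (both the loss form and the collection data are illustrative assumptions, not the paper's dataset or exact objective):

```python
import random

def contrastive_loss(d, same_collection, margin=1.0):
    """Contrastive loss on a pair distance d: pull same-collection icons
    together, push different-collection icons at least `margin` apart."""
    if same_collection:
        return d ** 2
    return max(0.0, margin - d) ** 2

# Icons grouped into visually coherent collections (toy data).
collections = {"flat": ["home", "search", "gear"], "outline": ["home_o", "search_o"]}

def sample_pair(rng):
    """Sample a training pair: positives within one collection,
    negatives across two different collections."""
    if rng.random() < 0.5:
        coll = rng.choice(list(collections))
        a, b = rng.sample(collections[coll], 2)
        return a, b, True
    c1, c2 = rng.sample(list(collections), 2)
    return rng.choice(collections[c1]), rng.choice(collections[c2]), False

rng = random.Random(0)
print(sample_pair(rng))
```

    Adaptive sampling, as mentioned in the abstract, would bias this pair selection toward informative examples rather than drawing uniformly.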

    How Will It Drape Like? Capturing Fabric Mechanics from Depth Images

    We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera. Our approach enables the creation of mechanically correct digital representations of real-world textile materials, a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods, which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale, is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators. To this end, we propose a sim-to-real strategy to train a learning-based framework that takes one or multiple images as input and outputs a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols, our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop. Key in our work is to demonstrate that evaluating regression accuracy by similarity in parameter space leads to inaccurate distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity that operates on the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a similarity rank. We show that our metric correlates with human judgments about the perception of drape similarity, and that our model predictions produce perceptually accurate results compared to the ground-truth parameters.
    Comment: 12 pages, 12 figures. Accepted to EUROGRAPHICS 2023. Project website: https://carlosrodriguezpardo.es/projects/MechFromDepth
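    The central idea is to compare drapes in image space rather than parameter space. A minimal sketch under strong assumptions: mean absolute pixel difference stands in for the paper's learned drape metric, and the 2x2 "images" are toy data for three hypothetical stiffness settings:

```python
def image_distance(img_a, img_b):
    """Mean absolute pixel difference, a crude stand-in for a learned drape metric."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    return sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)

def rank_by_similarity(query, candidates, dist=image_distance):
    """Rank candidate drapes from most to least similar to the query image."""
    return sorted(candidates, key=lambda cid: dist(query, candidates[cid]))

# Toy 2x2 "drape images" (illustrative values only).
query = [[0, 0], [1, 1]]
candidates = {
    "stiff":  [[0, 0], [1, 1]],
    "medium": [[0, 0], [2, 2]],
    "soft":   [[5, 5], [5, 5]],
}
print(rank_by_similarity(query, candidates))  # ['stiff', 'medium', 'soft']
```

    Ranking in image space sidesteps the problem the abstract raises: two parameter sets can be numerically far apart yet produce drapes that look nearly identical to a human observer.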

    CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering

    Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models, then apply these learned models to real-world images. To that end, we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging CGIntrinsics and, optionally, IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state of the art on both IIW and SAW, and performance improves even further when IIW and SAW data is added during training. Our work demonstrates the surprising effectiveness of carefully rendered synthetic data for the intrinsic images task.
    Comment: Paper for 'CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering', published in ECCV 2018
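    The underlying model in intrinsic image decomposition is that an image factors pixel-wise into albedo (reflectance) and shading, and the factorization is only defined up to a global scale, which is why scale-invariant errors are commonly used. A minimal sketch; the scale-invariant MSE below is a standard formulation, not necessarily this paper's exact training objective:

```python
def reconstruct(albedo, shading):
    """Intrinsic image model: each pixel of the image is albedo * shading."""
    return [a * s for a, s in zip(albedo, shading)]

def si_mse(pred, gt):
    """Scale-invariant MSE: fit the best global scalar to the prediction
    before measuring the error, since the decomposition is ambiguous up to scale."""
    scale = sum(p * g for p, g in zip(pred, gt)) / sum(p * p for p in pred)
    return sum((scale * p - g) ** 2 for p, g in zip(pred, gt)) / len(gt)

# A prediction that is correct up to a factor of 2 incurs zero scale-invariant error.
print(si_mse([2.0, 4.0, 6.0], [1.0, 2.0, 3.0]))  # 0.0
```

    This scale ambiguity is also why sparse human annotations such as IIW's pairwise reflectance comparisons, which only order pixels relative to each other, are usable as supervision.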

    PRKG2 Splice Site Variant in Dogo Argentino Dogs with Disproportionate Dwarfism

    Dwarfism phenotypes occur in many species and may be caused by genetic or environmental factors. In this study, we investigated a family of nine Dogo Argentino dogs, in which two dogs were affected by disproportionate dwarfism. Radiographs of an affected dog revealed a decreased level of endochondral ossification in its growth plates and a premature closure of the distal ulnar physes. The pedigree of the dogs presented evidence of monogenic autosomal recessive inheritance; combined linkage and homozygosity mapping assigned the most likely position of a potential genetic defect to 34 genome segments, totaling 125 Mb. The genome of an affected dog was sequenced and compared to 795 control genomes. The prioritization of private variants revealed a clear top candidate variant for the observed dwarfism. This variant, PRKG2:XM_022413533.1:c.1634+1G>T, affects a splice donor site and is therefore predicted to disrupt the function of the PRKG2 gene, which encodes protein kinase cGMP-dependent type 2, a known regulator of chondrocyte differentiation. The genotypes of the PRKG2 variant were perfectly associated with the phenotype in the studied family of dogs. PRKG2 loss-of-function variants were previously reported to cause disproportionate dwarfism in humans, cattle, mice, and rats. Together with the comparative data from other species, our data strongly suggest PRKG2:c.1634+1G>T to be a candidate causative variant for the observed dwarfism phenotype in Dogo Argentino dogs.
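    The variant's predicted effect follows from basic splice-site biology: in HGVS notation, c.1634+1G>T changes the first intronic base after exon position 1634, destroying the canonical "GT" donor dinucleotide that the spliceosome requires. A toy sketch of that reasoning; the flanking sequence below is hypothetical, and only the donor position models the reported variant:

```python
def splice_donor(intron_seq):
    """The first two intronic bases; the canonical spliceosomal donor is 'GT'."""
    return intron_seq[:2].upper()

def apply_variant(intron_seq, offset, alt):
    """Apply a single-base substitution at a 1-based intronic offset
    (the '+1' in c.1634+1G>T)."""
    i = offset - 1
    return intron_seq[:i] + alt + intron_seq[i + 1:]

ref_intron = "gtaagtatcc"  # hypothetical flanking sequence; only the leading donor matters
alt_intron = apply_variant(ref_intron, 1, "t")  # models the +1 G>T substitution
print(splice_donor(ref_intron), splice_donor(alt_intron))  # GT TT
```

    With the donor destroyed, the intron can no longer be spliced normally, which is why such variants are routinely predicted to be loss-of-function.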